Goodness of Fit Test for Gumbel Distribution Based on Kullback-Leibler Information Using Several Different Estimators
Abstract
In this paper, our objective is to test the statistical hypothesis H_0 : F(x) = F_0(x) for all x against H_1 : F(x) ≠ F_0(x) for some x, where F_0(x) is a known distribution function. In this study, a goodness-of-fit test statistic for the Gumbel distribution based on Kullback-Leibler information is studied. The performance of the test under simple random sampling is investigated using Monte Carlo simulation. The Gumbel parameters are estimated by several methods, including maximum likelihood, order statistics, moments, and L-moments. Ten different distributions are considered under the alternative hypothesis. For all the distributions considered, the test statistics based on the moment and order-statistic estimators have the highest power, except for the Weibull and Lognormal distributions.
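As a rough illustration of the kind of procedure the abstract describes, the sketch below implements one standard Kullback-Leibler-based goodness-of-fit statistic: the sample entropy is estimated with the Vasicek spacing estimator and compared against the mean log-density under a fitted Gumbel model (the statistic is near zero under H_0 and grows under alternatives). The window heuristic `m ≈ √n`, the use of maximum likelihood fitting via `scipy.stats.gumbel_r.fit`, and all function names are my assumptions, not the authors' exact construction.

```python
import numpy as np
from scipy.stats import gumbel_r


def vasicek_entropy(x, m):
    """Vasicek spacing estimator of differential entropy.

    Uses m-spacings of the order statistics, with boundary indices
    clipped to the sample range (the usual convention).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2 * m)))


def kl_gumbel_stat(x, m=None):
    """KL-information statistic for the Gumbel null hypothesis.

    Estimates KL(F || F0_hat) as  -H_vasicek(x) - mean log f0(x; theta_hat),
    where theta_hat are the Gumbel parameters fitted by maximum
    likelihood (assumption: the paper also studies other estimators).
    Large values indicate departure from the Gumbel model.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(round(np.sqrt(n)))  # common heuristic window size
    loc, scale = gumbel_r.fit(x)
    return -vasicek_entropy(x, m) - np.mean(gumbel_r.logpdf(x, loc, scale))
```

In practice, critical values for a given sample size n would be obtained by Monte Carlo simulation under H_0, as the abstract indicates, since the null distribution of the statistic is not available in closed form.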
Similar Resources
Testing Normality Based on Kullback-Leibler Information With Progressively Type-II Censored Data
We use the joint entropy of progressively censored order statistics, expressed in terms of an incomplete integral of the hazard function, to provide a simple estimate of the joint entropy of progressively Type-II censored data, introduced by Balakrishnan et al. (2007). We then construct a goodness-of-fit test statistic based on Kullback-Leibler information for the normal distribution. Finally, ...
Some Statistical Inferences on the Parameters of Records Weibull Distribution Using Entropy
In this paper, we discuss different estimators of the parameters of the records Weibull distribution, and we also apply the Kullback-Leibler divergence of the survival function method to estimate the records Weibull parameters. Finally, these estimators are compared using Monte Carlo simulation, and good estimators are suggested.
On Some Properties of Goodness of Fit Measures Based on Statistical Entropy
Goodness of fit tests can be categorized in several ways. One categorization may be based on the differences between observed and expected frequencies. Another may be based upon the differences between some values of distribution functions. Still another may be based upon the divergence of one distribution from the other. Some widely used and well known divergences like ...
Using Kullback-Leibler distance for performance evaluation of search designs
This paper considers the search problem, introduced by Srivastava [Sr]. This is a model discrimination problem. In the context of search linear models, the discrimination ability of search designs has been studied by several researchers. Some criteria have been developed to measure this capability; however, they are restricted in the sense of being able to search for only one possibl...
Tracking Interval for Type II Hybrid Censoring Scheme
The purpose of this paper is to obtain the tracking interval for the difference of expected Kullback-Leibler risks of two models under a Type II hybrid censoring scheme. This interval helps us to evaluate the proposed models in comparison with each other. We derive a statistic which tracks the difference of expected Kullback-Leibler risks between maximum likelihood estimators of the distribution in two diff...